Geometric camera calibration is often required for applications that reason about the perspective of an image. We propose Perspective Fields as a representation that models the local perspective properties of an image. Perspective Fields contain per-pixel information about the camera view, parameterized as an up vector and a latitude value. This representation has a number of advantages: it makes minimal assumptions about the camera model and is invariant or equivariant to common image editing operations like cropping, warping, and rotation. It is also more interpretable and aligned with human perception. We train a neural network to predict Perspective Fields, and the predicted fields can easily be converted to calibration parameters. We demonstrate the robustness of our approach under various scenarios compared with camera calibration-based methods and show example applications in image compositing.
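For concreteness, below is a minimal sketch of the forward direction only: computing a Perspective Field (per-pixel up vector and latitude) from known pinhole parameters, assuming a simple roll/pitch camera with image y pointing down. The conventions and function names are illustrative, not the authors' released code.

```python
# Illustrative sketch: Perspective Field from known pinhole parameters.
# Assumptions (not from the paper): camera looks down +z, image y grows downward,
# and "up" in the world is taken as (0, -1, 0) to match that image convention.
import numpy as np

def rotation_from_roll_pitch(roll, pitch):
    """World-to-camera rotation built from roll and pitch (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    R_roll = np.array([[cr, -sr, 0.0], [sr, cr, 0.0], [0.0, 0.0, 1.0]])
    R_pitch = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    return R_roll @ R_pitch

def perspective_field(h, w, fov_deg, roll=0.0, pitch=0.0):
    f = 0.5 * w / np.tan(0.5 * np.radians(fov_deg))   # focal length in pixels
    R = rotation_from_roll_pitch(roll, pitch)          # world -> camera
    up_world = np.array([0.0, -1.0, 0.0])

    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    rays_cam = np.stack([xs - w / 2, ys - h / 2, np.full_like(xs, f)], axis=-1)
    rays_world = rays_cam @ R                          # rotate rays into the world frame
    rays_world /= np.linalg.norm(rays_world, axis=-1, keepdims=True)

    # Latitude: angle between each viewing ray and the horizontal plane.
    latitude = np.arcsin(rays_world @ up_world)

    def project(p_world):                              # world points -> pixel offsets
        p_cam = p_world @ R.T
        return np.stack([f * p_cam[..., 0] / p_cam[..., 2],
                         f * p_cam[..., 1] / p_cam[..., 2]], axis=-1)

    # Up vector: image-plane direction of a small world-up displacement along each ray.
    pts = rays_world                                   # points at unit distance along rays
    up_vec = project(pts + 1e-3 * up_world) - project(pts)
    up_vec /= np.linalg.norm(up_vec, axis=-1, keepdims=True)
    return up_vec, latitude
```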
We introduce VISOR, a new dataset of pixel annotations and a benchmark suite for segmenting hands and active objects in egocentric video. VISOR annotates videos from EPIC-KITCHENS, which come with a new set of challenges not encountered in current video segmentation datasets. Specifically, we need to ensure both short- and long-term consistency of pixel-level annotations as objects undergo transformative interactions, e.g. an onion is peeled, diced, and cooked, and we aim to obtain accurate pixel-level annotations of the peel, onion pieces, chopping board, knife, pot, as well as the acting hands. VISOR introduces an annotation pipeline, AI-powered in parts, for scalability and quality. In total, we publicly release 272K manual semantic masks of 257 object classes, 9.9M interpolated dense masks, and 67K hand-object relations, covering 36 hours of 179 untrimmed videos. Along with the annotations, we introduce three challenges in video object segmentation, interaction understanding, and long-term reasoning. For data, code, and leaderboards: http://epic-kitchens.github.io/visor
We propose a simple baseline for directly estimating the relative pose (rotation and translation, including scale) between two images. Deep methods have recently shown strong progress but often require complex or multi-stage architectures. We show that a handful of modifications can be applied to a Vision Transformer (ViT) to bring its computations close to the Eight-Point Algorithm. This inductive bias enables a simple method to be competitive in multiple settings, often with strong performance gains in limited-data regimes, substantially improving results.
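For reference, the classical Eight-Point Algorithm that the modified ViT is meant to approximate can be sketched as follows. This is standard multi-view geometry, not the paper's network; NumPy is used purely for illustration.

```python
# Classical Eight-Point Algorithm: estimate the essential matrix E from >= 8
# normalized correspondences, then recover relative rotation and translation (up to scale).
import numpy as np

def essential_from_correspondences(x1, x2):
    """x1, x2: (N, 2) normalized image coordinates (K^-1 already applied), N >= 8."""
    n = x1.shape[0]
    A = np.zeros((n, 9))
    for i in range(n):
        u1, v1 = x1[i]
        u2, v2 = x2[i]
        # One row of the linear system A e = 0 from the epipolar constraint x2^T E x1 = 0.
        A[i] = [u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Project onto the space of valid essential matrices (two equal singular values, one zero).
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

def decompose_essential(E):
    """Return the four (R, t) candidates; the correct one is chosen by cheirality checks."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U *= -1
    if np.linalg.det(Vt) < 0:
        Vt *= -1
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]                 # translation is recovered only up to scale
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
```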
We propose a method for planar surface reconstruction of a scene from images with limited overlap. This reconstruction task is challenging because it requires jointly reasoning about single-image 3D reconstruction, correspondence between the images, and the relative camera pose between the images. Past work has proposed optimization-based approaches. We introduce a simpler approach, the PlaneFormer, which performs 3D reasoning with a transformer applied to 3D-aware plane tokens. Our experiments show that our approach is substantially more effective than prior work, and that several 3D-specific design decisions are critical to its success.
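As a rough illustration of the plane-token idea (the actual PlaneFormer differs in its token construction, heads, and losses), one might build a token for each detected plane from its appearance feature plus its 3D plane parameters, and let a standard transformer encoder reason jointly over all planes from both views:

```python
# Hypothetical sketch of a transformer over 3D-aware plane tokens (not the paper's model).
import torch
import torch.nn as nn

class PlaneTokenTransformer(nn.Module):
    def __init__(self, feat_dim=256, d_model=256, n_heads=8, n_layers=4):
        super().__init__()
        # Token = [appearance feature | plane normal (3) | plane offset (1) | view id (1)]
        self.embed = nn.Linear(feat_dim + 3 + 1 + 1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.match_head = nn.Linear(d_model, d_model)   # embeddings used for plane matching

    def forward(self, feats, normals, offsets, view_ids):
        # feats: (B, P, feat_dim); normals: (B, P, 3); offsets, view_ids: (B, P, 1)
        tokens = torch.cat([feats, normals, offsets, view_ids], dim=-1)
        x = self.encoder(self.embed(tokens))
        emb = self.match_head(x)
        # Pairwise similarity between plane embeddings gives correspondence scores.
        return emb @ emb.transpose(1, 2)
```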
We present a method for scene-level 3D reconstruction, including occluded regions, from an unseen RGB image. Our method is trained on real 3D scans and images. This problem has proven hard for several reasons: real scans are not watertight, which precludes many methods; distances in scenes require reasoning across objects (making the problem even harder); and, as we show, uncertainty about surface location motivates networks to produce outputs that lack basic distance-function properties. We propose a new distance-like function that can be computed on unstructured scans and behaves well under uncertainty about surface location. Computing this function along rays further reduces the complexity. We train a deep network to predict this function and show it outperforms other methods on Matterport3D, 3DFront, and ScanNet.
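To make the ray-based idea concrete, here is an illustrative distance-like quantity evaluated along a single ray (not the paper's exact definition): each query depth is scored by its distance to the nearest ray/surface intersection, which requires only the 1D list of hit depths for that ray rather than a watertight mesh.

```python
# Illustrative unsigned distance along a camera ray (an assumption-laden stand-in,
# not the paper's function).
import numpy as np

def ray_distance(sample_depths, hit_depths):
    """sample_depths: (S,) query depths along the ray.
    hit_depths: (H,) depths where the ray meets scanned surfaces (may be empty)."""
    sample_depths = np.asarray(sample_depths, dtype=float)
    if len(hit_depths) == 0:
        return np.full_like(sample_depths, np.inf)
    # Distance from each sample to its closest intersection along the same ray.
    return np.min(np.abs(sample_depths[:, None] - np.asarray(hit_depths)[None, :]), axis=1)

# Example: a ray that hits a surface at depth 2.0 m and again (behind it) at 3.5 m.
print(ray_distance(np.linspace(0.0, 4.0, 9), [2.0, 3.5]))
```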
Humans can perceive a scene in 3D from only a handful of 2D views. For AI agents, the ability to recognize a scene from any viewpoint given only a few images enables them to interact efficiently with the scene and its objects. In this work, we attempt to endow machines with this ability. We propose a model that takes a few RGB images of a new scene as input and recognizes the scene from novel viewpoints by segmenting it into semantic categories, all without access to the RGB images from those viewpoints. We pair 2D scene recognition with an implicit 3D representation and learn from multi-view 2D annotations of hundreds of scenes, without any 3D supervision beyond camera poses. We experiment on challenging datasets and demonstrate our model's ability to jointly capture the semantics and geometry of novel scenes with diverse layouts, object types, and shapes.
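A hypothetical sketch of the general recipe of pairing semantics with an implicit 3D representation (the paper's actual model differs): an MLP maps 3D points to semantic logits and a density, and a novel-view segmentation is produced by compositing those logits along each camera ray.

```python
# Illustrative implicit semantic field with volume-rendering-style compositing.
import torch
import torch.nn as nn

class SemanticField(nn.Module):
    def __init__(self, n_classes=20, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_classes + 1))   # +1 for density

    def render_ray(self, origin, direction, n_samples=64, near=0.1, far=8.0):
        ts = torch.linspace(near, far, n_samples)
        pts = origin[None, :] + ts[:, None] * direction[None, :]    # (S, 3) samples
        out = self.mlp(pts)
        density = torch.relu(out[:, -1])
        logits = out[:, :-1]
        # Alpha-composite semantic logits along the ray.
        delta = ts[1] - ts[0]
        alpha = 1.0 - torch.exp(-density * delta)
        trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
        weights = alpha * trans
        return (weights[:, None] * logits).sum(dim=0)                # per-ray class logits
```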
What is a good vector representation of an object? We believe that it should be generative in 3D, in the sense that it can produce new 3D objects; as well as be predictable from 2D, in the sense that it can be perceived from 2D images. We propose a novel architecture, called the TL-embedding network, to learn an embedding space with these properties. The network consists of two components: (a) an autoencoder that ensures the representation is generative; and (b) a convolutional network that ensures the representation is predictable. This enables tackling a number of tasks including voxel prediction from 2D images and 3D model retrieval. Extensive experimental analysis demonstrates the usefulness and versatility of this embedding.
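A minimal, hypothetical PyTorch sketch of the two-component design (layer sizes and names are illustrative, not the paper's exact network): a voxel autoencoder keeps the embedding generative in 3D, while an image encoder regresses the same embedding from a 2D view.

```python
# Illustrative TL-style embedding network: 3D autoencoder + 2D-to-embedding predictor.
import torch
import torch.nn as nn

class VoxelAutoencoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(1, 32, 4, 2, 1), nn.ReLU(),
                                 nn.Conv3d(32, 64, 4, 2, 1), nn.ReLU(),
                                 nn.Flatten(), nn.Linear(64 * 8**3, dim))
        self.dec = nn.Sequential(nn.Linear(dim, 64 * 8**3), nn.ReLU(),
                                 nn.Unflatten(1, (64, 8, 8, 8)),
                                 nn.ConvTranspose3d(64, 32, 4, 2, 1), nn.ReLU(),
                                 nn.ConvTranspose3d(32, 1, 4, 2, 1), nn.Sigmoid())

    def forward(self, voxels):                  # voxels: (B, 1, 32, 32, 32)
        z = self.enc(voxels)
        return z, self.dec(z)

class ImageToEmbedding(nn.Module):
    """2D encoder trained to predict the 3D embedding z from a rendered/real image."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 32, 5, 2, 2), nn.ReLU(),
                                 nn.Conv2d(32, 64, 5, 2, 2), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(64, dim))

    def forward(self, image):                   # image: (B, 3, H, W)
        return self.net(image)
```

Training couples the two parts: a reconstruction loss on the decoded voxels keeps the embedding generative, while an L2 loss between the image branch's output and the voxel encoder's output keeps it predictable from 2D. At test time, the image branch plus the decoder gives voxel prediction, and nearest neighbors in the embedding space give 3D model retrieval.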
In this paper, we propose a novel technique, namely INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR reasons about program semantics via program invariants, and it also captures program syntax via language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. Then, INVALIDATOR determines that an APR-generated patch overfits if it (1) violates correct specifications or (2) maintains erroneous behaviors of the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a model trained on labeled patches to assess patch correctness based on program syntax. The benefit of INVALIDATOR is three-fold. First, INVALIDATOR is able to leverage both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated; it relies only on the current test suite and uses invariant inference to generalize the behaviors of a program. Third, INVALIDATOR is fully automated. We have conducted our experiments on a dataset of 885 patches generated on real-world programs in Defects4J. Experimental results show that INVALIDATOR correctly classifies 79% of overfitting patches, detecting 23% more overfitting patches than the best baseline. INVALIDATOR also substantially outperforms the best baselines by 14% and 19% in terms of Accuracy and F-Measure, respectively.
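The two-stage decision rule can be sketched as follows; the invariant-inference and syntax-classifier helpers are hypothetical placeholders standing in for the components the abstract names, not INVALIDATOR's actual API.

```python
# Illustrative sketch of the overfitting-patch decision procedure described above.
def assess_patch(buggy_prog, developer_patched_prog, apr_patched_prog,
                 infer_invariants, syntax_model, threshold=0.5):
    # Likely invariants capture correct behavior (from the developer patch)
    # and erroneous behavior (from the original buggy program).
    correct_spec = infer_invariants(developer_patched_prog)
    error_behavior = infer_invariants(buggy_prog) - correct_spec
    patch_inv = infer_invariants(apr_patched_prog)

    # Semantic reasoning: the patch overfits if it violates correct specifications
    # or still exhibits the buggy program's erroneous behaviors.
    if not correct_spec.issubset(patch_inv):
        return "overfitting"
    if patch_inv & error_behavior:
        return "overfitting"

    # Fallback: syntactic reasoning with a classifier trained on labeled patches.
    score = syntax_model.predict_overfitting_probability(apr_patched_prog)
    return "overfitting" if score > threshold else "likely correct"
```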
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
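For reference, the two notions that the result connects can be stated compactly (paraphrased, not quoted from the paper): posterior collapse means the posterior equals the prior, and non-identifiability means the likelihood carries no information about the latent variable, so Bayes' rule returns the prior.

```latex
% Paraphrased statement of the equivalence proved in the paper.
\[
  \underbrace{p_\theta(z \mid x) = p(z) \;\; \text{for almost all } x}_{\text{posterior collapse}}
  \quad\Longleftrightarrow\quad
  \underbrace{p_\theta(x \mid z) = p_\theta(x \mid z') \;\; \text{for all } z, z'}_{\text{latent non-identifiability}}
\]
```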